
    Making use of partial knowledge about hidden states in HMMs: an approach based on belief functions.

    This paper addresses the problem of parameter estimation and state prediction in Hidden Markov Models (HMMs) based on observed outputs and partial knowledge of hidden states expressed in the belief function framework. The usual HMM model is recovered when the belief functions are vacuous. Parameters are learnt using the Evidential Expectation-Maximization algorithm, a recently introduced variant of the Expectation-Maximization algorithm for maximum likelihood estimation from uncertain data. The inference problem, i.e., finding the most probable sequence of states given the observed outputs and partial knowledge of the states, is also addressed. Experimental results demonstrate that partial information about hidden states, when available, may substantially improve estimation and prediction performance.
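The paper's evidential estimator is not reproduced here, but the role of partial state knowledge can be sketched with a plausibility-weighted forward pass (a minimal illustration under our own assumptions, not the authors' algorithm; the weighting scheme and all names are ours). With vacuous plausibilities (all ones), the standard HMM forward algorithm is recovered, mirroring the abstract's remark that the usual HMM is a special case.

```python
import numpy as np

def forward_with_plausibility(A, B, pi, obs, pl):
    """Forward pass of a discrete HMM in which pl[t, j] encodes partial
    knowledge of hidden state j at time t (plausibility in [0, 1]).
    pl = 1 everywhere is vacuous and yields the standard forward pass.

    A: (N, N) transition matrix, B: (N, K) emission matrix,
    pi: (N,) initial distribution, obs: observed symbol indices."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pl[0] * pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = pl[t] * B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha.sum(axis=1)[-1]  # (unnormalised) sequence likelihood
```

Injecting non-vacuous plausibilities simply down-weights state trajectories that contradict the partial knowledge, which is the intuition behind exploiting such side information.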

    Parameter identification in Choquet Integral by the Kullback-Leibler divergence on continuous densities with application to classification fusion.

    Classifier fusion increases the accuracy and decision-making capability of classification systems by designing a set of base classifiers and then combining their outputs. The combination is performed by a non-linear functional depending on a fuzzy measure, the Choquet integral. It constitutes a vast family of aggregation operators, including the minimum, the maximum and the weighted sum. The main issue before applying the Choquet integral is to identify its 2^M − 2 parameters for M classifiers. We follow previous work by Kojadinovic and one of the authors in which the identification is performed using an information-theoretic approach. The underlying probability densities are smoothed by fitting continuous parametric densities, and the Kullback-Leibler divergence is then used to identify the fuzzy measure. The proposed framework is applied to widely used datasets.
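To make the aggregation concrete, here is a minimal sketch of the discrete Choquet integral with respect to a fuzzy measure on M classifier scores (illustrative only; the measure values used below are arbitrary, not ones identified by the Kullback-Leibler procedure). The measure assigns a weight to every subset of classifiers, of which 2^M − 2 values are free once mu(∅) = 0 and mu(full set) = 1 are imposed.

```python
import numpy as np

def choquet_integral(x, mu):
    """Discrete Choquet integral of classifier scores x with respect to a
    fuzzy measure mu: a dict mapping frozensets of classifier indices to
    [0, 1], with mu(frozenset()) = 0 and mu(all indices) = 1."""
    order = np.argsort(x)               # scores in ascending order
    remaining = set(range(len(x)))      # indices still at or above the level
    total, prev = 0.0, 0.0
    for i in order:
        total += (x[i] - prev) * mu[frozenset(remaining)]
        prev = x[i]
        remaining.discard(i)
    return total
```

As the abstract notes, this family subsumes the classical operators: an additive measure reduces the integral to a weighted sum, mu = 0 on all proper subsets yields the minimum, and mu = 1 on all non-empty subsets yields the maximum.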

    Prognostics in switching systems: Evidential Markovian classification of real-time neuro-fuzzy predictions.

    Condition-based maintenance is nowadays considered a key process in maintenance strategies, and prognostics appears to be a very promising activity, as it should make it possible to avoid inopportune spending. Various approaches have been developed, and data-driven methods are increasingly applied. The training step of these methods generally requires huge datasets, since many methods rely on probability theory and/or artificial neural networks. This step is thus time-consuming and generally performed in batch mode, which can be restrictive in practical applications when few data are available. A prognostic method is proposed to address this problem of lack of information and missing prior knowledge. The approach is based on the integration of three complementary modules and aims at predicting the failure mode early, while the system can switch between several functioning modes. The three modules are: 1) observation selection based on information theory and the Choquet integral, 2) prediction relying on an evolving real-time neuro-fuzzy system, and 3) classification into one of the possible functioning modes using an evidential Markovian classifier based on Dempster-Shafer theory. Experiments concern the prediction of engine health based on more than twenty observations.

    Remaining Useful Life Estimation by Classification of Predictions Based on a Neuro-Fuzzy System and Theory of Belief Functions.

    Various approaches for prognostics have been developed, and data-driven methods are increasingly applied. The training step of these methods generally requires huge datasets to build a model of the degradation signal and to estimate the limit under which the degradation signal should stay. The applicability and accuracy of these methods are thereby closely related to the amount of available data, and sometimes even require the user to make assumptions on the dynamics of health-state evolution. Accordingly, the aim of this paper is to propose a method for prognostics and remaining useful life estimation that starts from scratch, without any prior knowledge. Assuming that remaining useful life can be seen as the time between the current instant and the instant at which the degradation exceeds an acceptable limit, the proposition is based on a classification-of-predictions strategy (CPS) that relies on two components. First, it relies on an evolving real-time neuro-fuzzy system that forecasts observations in time. Second, it relies on an evidential Markovian classifier based on Dempster-Shafer theory that classifies observations into the possible functioning modes. This approach has the advantage of coping with a lack of data by using an evolving system and the theory of belief functions. One of its main assets is the possibility of training the prognostic system without setting any threshold. The whole proposition is illustrated and assessed using the C-MAPSS turbofan dataset. RUL estimates are shown to be very close to actual values, and the approach appears to accurately estimate the failure instants, even with few learning data.
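The stated definition of remaining useful life (the time until the degradation first exceeds an acceptable limit) can be written as a tiny helper (a toy illustration of the definition only, not the paper's CPS pipeline; the function name and the simple threshold test are our assumptions):

```python
import numpy as np

def rul_from_predictions(predicted, limit):
    """Remaining useful life: number of steps from now until the predicted
    degradation signal first exceeds the acceptable limit, or None if the
    forecast never crosses it within the prediction horizon."""
    above = np.nonzero(np.asarray(predicted) > limit)[0]
    return int(above[0]) if above.size else None
```

In the paper the crossing instant is obtained differently, by classifying the forecast observations into functioning modes rather than by thresholding, which is precisely what lets the system be trained without setting any threshold.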

    From real data to remaining useful life estimation: an approach combining neuro-fuzzy predictions and evidential Markovian classifications.

    This paper proposes a prognostic approach that addresses the problem of lack of information and missing prior knowledge. The developments rely on the assumption that real data can be gathered online from the system. The approach consists of three phases. An information-theoretic criterion is first used to isolate the observations that are most useful with regard to the functioning modes of the system (feature selection step). An evolving neuro-fuzzy system is then used for online prediction of observations at any horizon (prediction step). Finally, the predicted observations are classified into the possible functioning modes using an evidential Markovian classifier based on Dempster-Shafer theory (classification step). The whole approach is illustrated on a problem concerning the prediction of engine health. The approach appears to be very efficient, since it estimates the failure instant early yet accurately, even with few learning data.

    Strategies to face imbalanced and unlabelled data in PHM applications.

    The accuracy and usefulness of learned data-driven PHM models are closely related to the availability and representativeness of data. Notably, two particular problems can be pointed out. First, how can the performance of learning algorithms be improved in the presence of under-represented data and severe class-distribution skews? This is often the case in PHM applications, where faulty data can be hard (even dangerous) to gather and can be sparsely distributed according to the solicitations and failure modes. Second, how can unlabelled data be handled? Indeed, in many PHM problems, health states and transitions between states are not well defined, which leads to imprecision and uncertainty challenges. Accordingly, the purpose of this paper is to address the problem of learning PHM models when data are imbalanced and/or unlabelled by proposing two types of learning schemes. Imbalanced and unlabelled data are first defined and illustrated, and a taxonomy of PHM problems is proposed. The aim of this classification is to rank the difficulty of developing PHM models with respect to the representativeness of data. Following that, two strategies are proposed as partial solutions to cope with imbalanced and unlabelled data. The first one relies on very fast and/or evolving algorithms. This kind of training scheme enables repeating the learning phase in order to manage state discovery as new data become available, notably when data are imbalanced. The second strategy deals with the incompleteness and uncertainty of labels by taking advantage of partially supervised training approaches. This enables taking some a priori knowledge into account and managing noise on labels. Both strategies are proposed in order to improve the robustness and reliability of estimates.

    Joint prediction of observations and states in time-series based on belief functions

    Forecasting the future states of a complex system is a complicated challenge that is encountered in many industrial applications covered by the Prognostics and Health Management (PHM) community. In practice, states can be either continuous or discrete: continuous states generally represent the value of a signal, while discrete states generally depict functioning modes reflecting the current degradation. Specific techniques exist for each case. In this paper, we propose an approach based on case-based reasoning that jointly estimates the future values of the continuous signal and the future discrete modes. The main characteristics of the proposed approach are the following: 1) it relies on the K-nearest-neighbours algorithm based on belief function theory; 2) belief functions allow the user to represent partial knowledge concerning the possible states in the training dataset, in particular concerning transitions between functioning modes, which are imprecisely known; 3) two distinct strategies are proposed for state prediction, and the fusion of both strategies is also considered. Two real datasets were used to assess the performance in estimating the future breakdown of a real system.
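The belief-function K-nearest-neighbours idea can be illustrated with a minimal evidential K-NN in the spirit of Denoeux's classification rule (a simplified sketch, not the paper's joint observation-and-state predictor; the parameter values, names, and distance-based discounting constants are our assumptions). Each neighbour contributes a simple mass function focused on its class and discounted with distance, and the masses are fused by Dempster's rule.

```python
import numpy as np

def eknn_masses(x, X_train, y_train, n_classes, K=3, gamma=1.0, alpha=0.95):
    """Evidential K-NN: combine the K nearest neighbours' simple mass
    functions (mass w on the neighbour's class, 1 - w on Omega) by
    Dempster's rule. Returns a vector of n_classes singleton masses
    followed by the mass on Omega (the whole frame)."""
    d = np.linalg.norm(X_train - x, axis=1)
    m = np.zeros(n_classes + 1)
    m[-1] = 1.0                       # start vacuous: all mass on Omega
    for i in np.argsort(d)[:K]:
        w = alpha * np.exp(-gamma * d[i] ** 2)   # distance discounting
        mi = np.zeros(n_classes + 1)
        mi[y_train[i]], mi[-1] = w, 1.0 - w
        # conjunctive combination for masses focused on singletons + Omega;
        # conflicting products (different singletons) fall on the empty set
        out = np.zeros(n_classes + 1)
        for c in range(n_classes):
            out[c] = m[c] * mi[c] + m[c] * mi[-1] + m[-1] * mi[c]
        out[-1] = m[-1] * mi[-1]
        m = out
    return m / m.sum()                # Dempster normalisation of conflict
```

A point can then be assigned to the singleton class with the largest mass, while the residual mass on Omega quantifies how imprecise the neighbourhood evidence is.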

    E2GK: Evidential evolving Gustafson-Kessel algorithm for data streams partitioning using belief functions.

    A new online clustering method, called E2GK (Evidential Evolving Gustafson-Kessel), is introduced in the theoretical framework of belief functions. The algorithm enables online partitioning of data streams based on two existing and efficient algorithms: Evidential c-Means (ECM) and Evolving Gustafson-Kessel (EGK). E2GK uses the concept of credal partition from ECM and adapts EGK, offering a better interpretation of the data structure. Experiments with synthetic datasets show the good performance of the proposed algorithm compared to the original online procedure.

    Lymphocyte segmentation using mixture of Gaussians and the transferable belief model.

    In the context of several pathologies, the presence of lymphocytes has been correlated with disease outcome. The ability to automatically detect lymphocyte nuclei in histopathology imagery could potentially result in the development of an image-based prognostic tool. In this paper we present a method based on the estimation of a mixture of Gaussians for determining the probability distribution of the principal image component. A post-processing stage then eliminates regions whose shape is not similar to the nuclei searched for. Finally, the Transferable Belief Model is used to detect the lymphocyte nuclei, and a shape-based algorithm possibly splits them under an equal-area and eccentricity-constraint principle.
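The mixture-of-Gaussians step can be sketched with a small two-component EM on pixel intensities (an illustrative 1-D version under our own assumptions, not the paper's implementation; the initialisation and the two-component choice are ours). It returns, for each pixel value, the posterior probability of the lower-mean (darker) component, which would serve as a crude nucleus-probability map before the shape-based post-processing and TBM stages.

```python
import numpy as np

def gmm_posterior_dark(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to pixel values x by EM and
    return, per pixel, the posterior probability of the darker component."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])           # crude initialisation
    var = np.full(2, x.var() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return r[:, np.argmin(mu)]
```

Thresholding this posterior map (or feeding it to the belief-function stage, as the paper does with the Transferable Belief Model) separates candidate nucleus pixels from background.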